
    On the left ventricular remodeling of patients with stenotic aortic valve: A statistical shape analysis

    The left ventricle (LV) constantly changes its shape and function in response to pathological conditions, a process known as remodeling. In the presence of aortic stenosis (AS), the degenerative process is not limited to the aortic valve but also involves remodeling of the LV. Statistical shape analysis (SSA) offers a powerful tool for visualizing and quantifying the geometric and functional patterns of any anatomic change. In this paper, an SSA method was developed to determine shape descriptors of the LV under different degrees of AS and thus to shed light on the mechanistic link between shape and function. Computed tomography (CT) scans acquired for the evaluation of valvulopathy in n = 86 patients were segmented to obtain the LV surface and then automatically aligned to a reference template by rigid registrations and transformations. Shape modes of the anatomical LV variation induced by the degree of AS were assessed by principal component analysis (PCA). The first shape mode represented nearly 50% of the total variance of LV shape in our patient population and was mainly associated with a spherical LV geometry. At Pearson’s analysis, the first shape mode was positively correlated with both the end-diastolic volume (p < 0.01, R = 0.814) and the end-systolic volume (p < 0.01, R = 0.922), suggesting LV impairment in patients with severe AS. A predictive model built with PCA-related shape modes achieved better performance in stratifying the occurrence of adverse events than a baseline model using clinical and demographic data as risk predictors. This study demonstrated the potential of SSA approaches to detect the association of complex 3D shape features with functional LV parameters.
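The pipeline described above (PCA over rigidly aligned surface points, then Pearson correlation of the first mode's scores with ventricular volumes) can be sketched as follows. This is a hedged illustration on randomly generated stand-in data; the landmark count and all values are invented, not taken from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 86 LV surfaces, each described by 500
# corresponding surface points (x, y, z), already aligned to a template.
n_patients, n_points = 86, 500
shapes = rng.normal(size=(n_patients, n_points * 3))

# PCA via SVD of the mean-centered shape matrix.
mean_shape = shapes.mean(axis=0)
X = shapes - mean_shape
U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)   # fraction of variance per shape mode
mode1_scores = X @ Vt[0]          # projection of each patient on mode 1

# Pearson correlation of mode-1 scores with a functional parameter,
# e.g. end-diastolic volume (random here, purely for illustration).
edv = rng.normal(150.0, 30.0, size=n_patients)
r, p = stats.pearsonr(mode1_scores, edv)
print(f"mode 1 explains {explained[0]:.1%} of variance, r = {r:.2f}")
```

With real, anatomically corresponding landmarks, `explained[0]` would play the role of the ~50% variance figure reported above, and `r` the correlation with volume.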

    Patient-specific analysis of ascending thoracic aortic aneurysm with the living heart human model

    In ascending thoracic aortic aneurysms (ATAAs), aneurysm kinematics are driven by the ventricular traction occurring every heartbeat, which increases the stress level of the dilated aortic wall. Aortic elongation due to heart motion and aortic length are emerging as potential indicators of adverse events in ATAAs; however, simulation of an ATAA that takes cardiac mechanics into account is technically challenging. The objective of this study was to adapt the realistic Living Heart Human Model (LHHM) to the anatomy and physiology of a patient with an ATAA to assess the role of cardiac motion in the aortic wall stress distribution. Patient-specific segmentation and material parameter estimation were performed using preoperative computed tomography angiography (CTA) and ex vivo biaxial testing of the tissue harvested during surgery. The lumped-parameter model of the systemic circulation implemented in the LHHM was refined using clinical and echocardiographic data. The results showed that the longitudinal stress was highest in the major curvature of the aneurysm, with stress in specific aortic quadrants changing from tensile to compressive in the transmural direction. This study revealed the key role of heart motion, which stretches the aortic root and increases ATAA wall tension. The ATAA LHHM is a realistic cardiovascular platform into which patient-specific information can be easily integrated to assess aneurysm biomechanics and potentially support the clinical management of patients with ATAAs.

    Left Ventricle Biomechanics of Low-Flow, Low-Gradient Aortic Stenosis: A Patient-Specific Computational Model

    This study aimed to create an imaging-derived patient-specific computational model of low-flow, low-gradient (LFLG) aortic stenosis (AS) to obtain biomechanics data about the left ventricle. LFLG AS is now a commonly recognized sub-type of aortic stenosis. There remains much controversy over its management, and investigation into ventricular biomechanics may elucidate the pathophysiology and better identify patients for valve replacement. ECG-gated cardiac computed tomography images from a patient with LFLG AS were obtained to provide patient-specific geometry for the computational model. Surfaces of the left atrium, left ventricle (LV), and outflow tract were segmented. A previously validated multi-scale, multi-physics computational human heart model was adapted to the patient-specific geometry, yielding a model consisting of 91,000 solid elements. This model was coupled to a virtual circulatory system and calibrated to clinically measured parameters from echocardiography and cardiac catheterization data. The simulation replicated key physiologic parameters within 10% of their clinically measured values. Global LV systolic myocardial stress was 7.1 ± 1.8 kPa. Mean stresses of the basal, middle, and apical segments were 7.7 ± 1.8 kPa, 9.1 ± 3.8 kPa, and 6.4 ± 0.4 kPa, respectively. This is the first patient-specific computational model of LFLG AS based on clinical imaging. Low myocardial stress correlated with low ejection fraction and eccentric LV remodeling. Further studies are needed to understand how alterations in LV biomechanics correlate with clinical outcomes of AS.

    Natural Interaction with Traffic Control Cameras Through Multimodal Interfaces

    Human-computer interfaces have always played a fundamental role in the usability of modern software systems and in the interpretability of their commands. With the rise of artificial intelligence, such interfaces have begun to fill the gap between the user and the system itself, evolving further into Adaptive User Interfaces (AUIs). Meta interfaces are a further step towards the user: they aim to support human activities in an ambient interactive space, so that the user can control the surrounding space and interact with it. This work proposes a meta user interface that exploits the Put That There paradigm to enable fast interaction through natural language and gestures. The application scenario is a video surveillance control room, in which the speed of actions and reactions is fundamental for urban safety and for driver and pedestrian security. The interaction targets three environments. The first is the control room itself, in which the operator can organize the views of the monitors connected to the on-site cameras by vocal commands and gestures, as well as route the audio to the headset or to the room speakers. The second concerns control of the video, in order to jump back and forth to a particular scene showing specific events, or to zoom a particular camera in or out; the third allows the operator to send a rescue vehicle to a particular street in case of need. Gesture data are acquired through a Microsoft Kinect 2, which captures pointing and gestures, allowing the user to interact multimodally and thus increasing the naturalness of the interaction; the related module maps the movement information to a particular instruction, also supported by the vocal commands that trigger its execution.
Vocal commands are mapped by means of Microsoft’s LUIS (Language Understanding) framework, which supports rapid deployment of the application; furthermore, LUIS makes it possible to extend the domain-related command list so as to constantly improve and update the model. A testbed procedure investigates both system usability and multimodal recognition performance. The multimodal sentence error rate (the fraction of utterances in which even a single item is recognized incorrectly) is around 15%, resulting from the combination of possible failures in both the ASR and the gesture recognition model. However, intent classification accuracy ranges, on average across different users, around 89–92%, indicating that most of the errors in multimodal sentences lie in the slot-filling task. Usability was evaluated through a task completion paradigm (including interaction duration and counts of activity on affordances per task), learning curve measurements, and a posteriori questionnaires.
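The sentence-level metric described above (an utterance counts as an error if even a single item, intent or slot, is misrecognized) can be sketched as follows. The log entries are invented for illustration and do not reproduce the study's data:

```python
# Hypothetical evaluation log: one entry per multimodal utterance, recording
# whether the intent and each slot were recognized correctly.
utterances = [
    {"intent_ok": True,  "slots_ok": [True, True]},
    {"intent_ok": True,  "slots_ok": [True, False]},  # slot-filling miss
    {"intent_ok": False, "slots_ok": [True, True]},   # intent miss
    {"intent_ok": True,  "slots_ok": [True, True]},
]

def sentence_error_rate(log):
    """An utterance is wrong if ANY item in it (intent or slot) failed."""
    errors = sum(
        1 for u in log
        if not u["intent_ok"] or not all(u["slots_ok"])
    )
    return errors / len(log)

# Intent accuracy alone is higher than 1 - SER when slot errors dominate,
# which is the pattern reported above (SER ~15%, intent accuracy ~89-92%).
intent_accuracy = sum(u["intent_ok"] for u in utterances) / len(utterances)
print(sentence_error_rate(utterances), intent_accuracy)  # 0.5 0.75
```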

    A dialogue system for multimodal human-robot interaction

    This paper presents a POMDP-based dialogue system for multimodal human-robot interaction (HRI). Our aim is to exploit a dialogical paradigm to allow a natural and robust interaction between the human and the robot. The proposed dialogue system should improve the robustness and flexibility of the overall interactive system, including multimodal fusion, interpretation, and decision-making. The dialogue is represented as a Partially Observable Markov Decision Process (POMDP) to cast the inherent communication ambiguity and noise into the dialogue model. POMDPs have been used in spoken dialogue systems, mainly for tourist information services, but their application to multimodal human-robot interaction is novel. This paper presents the proposed model for dialogue representation and the methodology used to compute a dialogue strategy. The whole architecture has been integrated on a mobile robot platform and has been tested in a human-robot interaction scenario to assess the overall performance with respect to baseline controllers. © 2013 ACM
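The core mechanism by which a POMDP folds communication ambiguity and noise into the dialogue model is the belief update over hidden dialogue states. A minimal sketch follows, with a hypothetical intent set and a made-up observation model, not the paper's actual state or observation spaces:

```python
import numpy as np

# Hypothetical user intents the robot is uncertain about.
states = ["want_info", "want_navigation", "done"]
belief = np.array([1 / 3, 1 / 3, 1 / 3])  # uniform prior over intents

# Transition model P(s'|s,a): assume the intent persists under a
# clarification action (identity matrix, purely illustrative).
T = np.eye(3)

# Observation model P(o|s'): a noisy ASR/gesture hypothesis "info"
# is most likely when the true intent is want_info.
O_info = np.array([0.7, 0.2, 0.1])

def belief_update(b, T, O):
    """Bayes filter: b'(s') is proportional to O(o|s') * sum_s T(s'|s,a) b(s)."""
    b_new = O * (T.T @ b)
    return b_new / b_new.sum()

belief = belief_update(belief, T, O_info)
print(belief)  # probability mass shifts toward "want_info"
```

A dialogue strategy then maps the belief (not a single hypothesized state) to actions, which is what lets the system ask for clarification instead of committing to a noisy interpretation.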